Interview with AAAI Fellow Yan Liu: machine learning for time series
Each year, the AAAI recognizes a group of individuals who have made significant, sustained contributions to the field of artificial intelligence by appointing them as Fellows. Over the course of the next few months, we'll be talking to some of the 2026 AAAI Fellows. In this interview, we met with Yan Liu of the University of Southern California, who was elected as a Fellow. We found out how time series research has progressed, the vast range of applications, and what the future holds for this field.

Could you start with a quick introduction to your area of research?
Calibration of Shared Equilibria in General Sum Partially Observable Markov Games
We consider a general sum partially observable Markov game where agents of different types share a single policy network, conditioned on agent-specific information. This paper aims at i) formally understanding equilibria reached by such agents, and ii) matching emergent phenomena of such equilibria to real-world targets. Parameter sharing with decentralized execution has been introduced as an efficient way to train multiple agents using a single policy network.
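The parameter-sharing idea described above can be sketched as follows: one set of weights serves every agent, and each agent's type is appended to its observation so that behavior can still differ by type. This is a minimal illustrative sketch with a linear policy (the names, dimensions, and random data here are assumptions, not the paper's setup; real work would use a deep network).

```python
import numpy as np

rng = np.random.default_rng(0)

# One shared weight matrix serves all agents; each agent's type is
# appended to its observation as a one-hot vector, so the single policy
# can condition on agent-specific information.
OBS_DIM, N_TYPES, N_ACTIONS = 4, 3, 2
W = rng.normal(size=(OBS_DIM + N_TYPES, N_ACTIONS))  # shared parameters

def policy(obs, agent_type):
    """Action probabilities from the shared policy for one agent."""
    one_hot = np.eye(N_TYPES)[agent_type]
    logits = np.concatenate([obs, one_hot]) @ W
    e = np.exp(logits - logits.max())  # numerically stable softmax
    return e / e.sum()

obs = rng.normal(size=OBS_DIM)
# Same observation, different agent types -> generally different
# action distributions, despite a single set of parameters.
print(policy(obs, 0), policy(obs, 1))
```

Because all agents backpropagate into the same `W`, training cost grows slowly with the number of agents, which is the efficiency argument behind parameter sharing with decentralized execution.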
Accelerated Sparse Neural Training: A Provable and Efficient Method to Find N:M Transposable Masks
Unstructured pruning reduces the memory footprint of deep neural networks (DNNs). Recently, researchers have proposed different types of structural pruning intended to also reduce the computational complexity. In this work, we first suggest a new measure, called mask diversity, which correlates with the expected accuracy of the different types of structural pruning. We focus on the recently proposed N:M fine-grained block sparsity mask, in which each block of M weights contains at least N zeros. While N:M fine-grained block sparsity allows acceleration on actual modern hardware, it can only be used to accelerate the inference phase.
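The N:M constraint above is easy to state concretely: in every contiguous block of M weights, at least N entries must be zero. A common way to satisfy it is to zero the N smallest-magnitude weights per block. The sketch below (function name and 2:4 choice are illustrative assumptions, not the paper's method) applies such a mask with NumPy:

```python
import numpy as np

def apply_nm_mask(weights, n=2, m=4):
    """Zero the n smallest-magnitude weights in each block of m.

    Illustrative sketch of an N:M fine-grained block sparsity mask
    (here 2:4): each contiguous block of m weights keeps its m - n
    largest-magnitude entries and zeroes the rest.
    """
    w = np.asarray(weights, dtype=float).copy()
    blocks = w.reshape(-1, m)  # assumes w.size is divisible by m
    # Indices of the n smallest-magnitude entries in each block.
    idx = np.argsort(np.abs(blocks), axis=1)[:, :n]
    np.put_along_axis(blocks, idx, 0.0, axis=1)
    return blocks.reshape(w.shape)

w = np.array([0.9, -0.1, 0.4, 0.05, -0.7, 0.2, 0.03, 0.8])
print(apply_nm_mask(w))  # each block of 4 ends up with at least 2 zeros
```

The 2:4 pattern is the one supported by recent GPU sparse tensor cores, which is why this particular structured mask admits real hardware acceleration, unlike fully unstructured sparsity.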
Uncertainty Aware Semi-Supervised Learning on Graph Data
Thanks to graph neural networks (GNNs), semi-supervised node classification has achieved state-of-the-art performance on graph data. However, GNNs have not considered the different types of uncertainty associated with class probabilities, which would minimize the risk of misclassification under real-life uncertainty. In this work, we propose a multi-source uncertainty framework using a GNN that reflects various types of predictive uncertainty, drawn from both deep learning and belief/evidence theory, for node classification. By collecting evidence from the given labels of training nodes, the Graph-based Kernel Dirichlet distribution Estimation (GKDE) method is designed to accurately predict node-level Dirichlet distributions and detect out-of-distribution (OOD) nodes. We validated that our proposed model outperforms state-of-the-art counterparts in misclassification detection and OOD detection on six real network datasets. We found that dissonance-based detection yielded the best results for misclassification detection, while vacuity-based detection was best for OOD detection. To clarify these results, we provide a theoretical analysis of the relationships between the different types of uncertainty considered in this work.
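The vacuity/dissonance distinction mentioned above can be illustrated from a node's predicted Dirichlet parameters. The sketch below uses the standard subjective-logic formulation (belief mass b_k = (alpha_k - 1)/S, vacuity u = K/S with S the sum of the alphas); this is an assumption about the general framework, not a reproduction of the paper's exact definitions:

```python
import numpy as np

def vacuity_and_dissonance(alpha):
    """Vacuity and dissonance of a Dirichlet(alpha), subjective-logic style.

    Vacuity captures lack of evidence (high for OOD-like inputs);
    dissonance captures conflicting evidence (high near decision
    boundaries, where misclassification is likely).
    """
    alpha = np.asarray(alpha, dtype=float)
    K, S = alpha.size, alpha.sum()
    b = (alpha - 1.0) / S      # belief masses
    vacuity = K / S            # uncertainty mass from missing evidence

    def bal(bj, bk):
        # Relative balance between two belief masses: 1 when equal, 0
        # when one dominates entirely or both are zero.
        return 0.0 if bj + bk == 0 else 1.0 - abs(bj - bk) / (bj + bk)

    diss = 0.0
    for k in range(K):
        denom = b.sum() - b[k]
        if denom > 0:
            diss += b[k] * sum(b[j] * bal(b[j], b[k])
                               for j in range(K) if j != k) / denom
    return vacuity, diss

# Strong but conflicting evidence between two classes:
# low vacuity, high dissonance -> flags likely misclassification.
print(vacuity_and_dissonance([10.0, 10.0, 1.0]))
# No evidence at all (uniform prior): vacuity is 1 -> flags OOD.
print(vacuity_and_dissonance([1.0, 1.0, 1.0]))
```

This separation is exactly why the two measures specialize as the abstract reports: vacuity responds to absent evidence (OOD detection), while dissonance responds to contradictory evidence (misclassification detection).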